Bug reports are common artifacts in software development. They serve as the main channel through which users communicate to developers the issues they encounter when using released versions of software programs. In the description of an issue, however, a user may, intentionally or not, expose a vulnerability. In a typical maintenance scenario, such security-relevant bug reports are prioritized by the development team when preparing corrective patches. Nevertheless, when security relevance is not immediately expressed (e.g., via a tag) or rapidly identified by the triaging team, an open security-relevant bug report can become a critical leak of sensitive information that attackers can exploit to perform zero-day attacks. To support practitioners in triaging bug reports, the research community has proposed a number of approaches for detecting security-relevant bug reports. In recent years, machine-learning-based approaches in this area have been reported with promising performance. Our work focuses on these approaches and revisits their building blocks to provide a comprehensive view of the current achievements. To that end, we built a large experimental dataset and performed extensive experiments with variations in feature sets and learning algorithms. Ultimately, our study highlights the approach configurations that yield the best-performing classifiers.
Pressure ulcers have a high prevalence in ICU patients but are preventable if identified in their initial stages. In practice, the Braden scale is used to classify high-risk patients. This paper investigates the use of machine learning on electronic health record data for this purpose, using the data available in MIMIC-III v1.4. Two main contributions are made: a new approach for evaluating the models and the Braden scale that takes into account all predictions made throughout the patients' stays, and a new training method for the machine learning models. The results obtained surpass the state of the art, and all models outperform the Braden scale at every operating point of the precision-recall curve.
In this work, deep learning techniques for brain age prediction from magnetic resonance images are investigated, aiming to assist in the identification of biomarkers of the natural aging process. The identification of such biomarkers is useful for detecting neurodegenerative processes at an early stage, as well as for predicting age-related or non-age-related cognitive decline. Two techniques are implemented and compared in this work: a 3D convolutional neural network applied to the volumetric image, and a 2D convolutional neural network applied to slices from the axial plane, with subsequent fusion of the individual predictions. The best result was obtained by the 2D model, which achieved a mean absolute error of 3.83 years.
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while also capturing program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefits of INVALIDATOR are three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classified 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
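The two-stage decision rule above can be sketched in code. This is a minimal illustration under assumed representations: invariants are modeled as plain sets of strings, and `judge_patch`, `syntax_model`, and the set-based checks are hypothetical stand-ins, not INVALIDATOR's actual API.

```python
def judge_patch(patched_invs, correct_invs, error_invs, syntax_model, patch_features):
    """Two-stage overfitting check (illustrative sketch, not the tool's implementation).

    patched_invs: invariants inferred on the patched program
    correct_invs: invariants a correct program must satisfy (correct specifications)
    error_invs:   invariants characterizing erroneous behaviors of the buggy program
    """
    # Stage 1: semantic reasoning over inferred invariants.
    if not correct_invs.issubset(patched_invs):
        return "overfitting"  # (1) violates correct specifications
    if error_invs & patched_invs:
        return "overfitting"  # (2) maintains erroneous behaviors of the buggy program
    # Stage 2: fall back to syntactic reasoning with a classifier
    # trained on labeled patches (here a bare callable returning a score).
    return "overfitting" if syntax_model(patch_features) > 0.5 else "correct"
```

The ordering mirrors the abstract: the invariant-based checks are decisive when they fire, and the learned syntactic model is consulted only as a fallback.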
When robots learn reward functions using high-capacity models that take raw state directly as input, they need both to learn a representation for what matters in the task -- the task ``features" -- and to learn how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, so that we can learn their preferences and objectives, we rely on their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
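The similarity-query idea has a natural contrastive-style formulation. The sketch below is an illustration under assumptions (a standard contrastive loss over feature embeddings), not the authors' exact objective: `phi_a` and `phi_b` stand for learned feature vectors of two behaviors, and `similar` is the user's answer to a query.

```python
import numpy as np

def similarity_query_loss(phi_a, phi_b, similar, margin=1.0):
    """Contrastive-style loss over one user similarity query (illustrative sketch).

    Behaviors the user labels similar are pulled together in feature space;
    behaviors labeled different are pushed at least `margin` apart.
    """
    d = np.linalg.norm(phi_a - phi_b)  # distance between behavior embeddings
    if similar:
        return d ** 2                       # pull similar behaviors together
    return max(0.0, margin - d) ** 2        # push dissimilar behaviors apart
```

Minimizing this loss over many answered queries shapes the embedding so that only the features users actually attend to survive, which is the disambiguation the paragraph above describes.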
The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm inspired by the neighborhood embedding concept proposed for data visualization. However, multivariate tabular data pose different challenges in representation learning than image data, and traditional machine learning is often superior to deep learning on tabular data. In this paper, we address the challenges of learning from tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm that replaces t-distributions with multivariate Gaussian clusters. Unlike current methods, the proposed approach independently defines the Gaussian embedding and the target cluster distribution to accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster embedding methods on seven tabular data sets. This paper presents one of the first algorithms to jointly learn embedding and clustering to improve multivariate tabular data representation for downstream clustering.
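Replacing the t-distribution with Gaussian clusters can be illustrated with a soft-assignment step over the latent space. The sketch below assumes isotropic Gaussians with a shared bandwidth `sigma` for simplicity; the paper's method uses full multivariate Gaussian clusters and defines the target distribution separately.

```python
import numpy as np

def gaussian_cluster_assignment(Z, centers, sigma=1.0):
    """Soft cluster assignment with Gaussian kernels instead of the heavy-tailed
    t-distribution used in t-SNE-style cluster embeddings (illustrative sketch).

    Z:       (n, d) latent embeddings from the autoencoder
    centers: (k, d) cluster means
    Returns an (n, k) row-stochastic responsibility matrix.
    """
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    logits = -d2 / (2.0 * sigma ** 2)          # Gaussian log-density up to a constant
    q = np.exp(logits - logits.max(axis=1, keepdims=True))     # numerically stable
    return q / q.sum(axis=1, keepdims=True)    # normalize rows to probabilities
```

Because the assignment and the target distribution are decoupled, the target can come from any clustering algorithm run on the embedding, which is the flexibility the paragraph above highlights.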
An unbiased scene graph generation (SGG) algorithm, referred to as Skew Class-balanced Re-weighting (SCR), is proposed to address biased predicate prediction caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, at the cost of drastically dropping recall scores, i.e., losing majority predicate performance. The trade-off between majority and minority predicate performance in the limited SGG datasets has not yet been correctly analyzed. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then re-weights the biased predicates more strongly, for a better trade-off between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Image V4 & V6 demonstrate the performance and generality of SCR with traditional SGG models.
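One plausible reading of skew-based re-weighting can be sketched as follows. This is a hypothetical illustration, not the paper's estimator: it measures how far the model's predicted class frequencies are skewed away from the true predicate frequencies and derives per-predicate weight coefficients from that skew, up-weighting under-predicted (minority) classes.

```python
import numpy as np

def skew_reweighting(pred_counts, true_counts, beta=1.0):
    """Derive per-class loss weights from prediction skew (hypothetical sketch).

    pred_counts: how often each predicate class is predicted
    true_counts: how often each predicate class actually occurs
    A class the model over-predicts (positive skew) is down-weighted;
    an under-predicted class is up-weighted.
    """
    pred = np.asarray(pred_counts, float) + 1e-8
    true = np.asarray(true_counts, float) + 1e-8
    skew = pred / pred.sum() - true / true.sum()  # positive => biased toward class
    w = np.exp(-beta * skew)                       # re-weight against the bias
    return w / w.mean()                            # normalize around 1
```

Such weights would then multiply the per-class terms of the usual cross-entropy loss during SGG training.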
In this paper we discuss the theory used in the design of an open-source lightmorphic signatures analysis toolkit (LSAT). In addition to providing its core functionality, the software package enables specific optimizations through its modular and customizable design. To promote its usage and inspire future contributions, LSAT is publicly available. By using a self-supervised neural network and augmented machine learning algorithms, LSAT provides an easy-to-use interface with ample documentation. The experiments demonstrate that LSAT improves the otherwise tedious and error-prone tasks of translating lightmorphic-associated data into usable spectrograms, enhanced with parameter tuning and performance analysis. With the provided mathematical functions, LSAT validates the nonlinearity encountered in the data conversion process while ensuring the suitability of the forecasting algorithms.
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify such changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on the recent Isolation Distributional Kernel (IDK). iCID identifies a change interval if there is a high dissimilarity score between two non-homogeneous temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID have the ability to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
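The interval-dissimilarity idea can be sketched with a kernel two-sample score. Below, a plain RBF kernel and a squared-MMD-style statistic stand in for the Isolation Distributional Kernel, and the window width, threshold, and sliding scheme are illustrative assumptions rather than iCID's actual procedure.

```python
import numpy as np

def interval_dissimilarity(X, Y, gamma=1.0):
    """Distributional dissimilarity between two intervals via kernel mean
    embeddings (an MMD^2-style sketch; an RBF kernel stands in for IDK)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def detect_change_intervals(stream, width, threshold, gamma=1.0):
    """Flag boundaries between adjacent non-overlapping windows whose
    distributional dissimilarity exceeds a threshold (illustrative sketch)."""
    flags = []
    for start in range(0, len(stream) - 2 * width + 1, width):
        X = stream[start:start + width]
        Y = stream[start + width:start + 2 * width]
        if interval_dissimilarity(X, Y, gamma) > threshold:
            flags.append(start + width)  # candidate change-interval boundary
    return flags
```

A pair of homogeneous intervals scores near zero, while a distribution shift between adjacent intervals drives the score up, which is the detection signal the paragraph above describes.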
Artificial Intelligence (AI) has become commonplace in solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment, and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.